
Bot-Generated Comments on Government Proposals Could Be Useful Someday

Slate

When the Federal Communications Commission asked the public what it thought about its net neutrality rules in 2017, the comments flooded in--including millions submitted under fake names by bot comment generators. These missives added no value and raised concerns that people's identities were being stolen. Now everyone from Congressional Republicans to the New York State Attorney General has their sights set on shutting down the bots. But anxiety about the risks of computer-generated comments might go too far. We shouldn't allow overblown fears to squelch the development of future killer apps that could improve public participation in regulatory decision-making.


The real threat of fake voices in a time of crisis


Latanya Sweeney is a professor of government and technology in residence at Harvard University's Department of Government, editor-in-chief of Technology Science, and the founding director of the Technology Science Initiative and the Data Privacy Lab at the Institute for Quantitative Social Science at Harvard. Max Weiss is a senior at Harvard University and the student who implemented the Deepfake Text experiment. As federal agencies take increasingly stringent actions to try to limit the spread of the novel coronavirus pandemic within the U.S., how can individual Americans and U.S. companies affected by these rules weigh in with their opinions and experiences? Because many of the new rules, such as travel restrictions and increased surveillance, require expansions of federal power beyond normal circumstances, our laws require the federal government to post proposed rules publicly and allow the public to comment on them online. But are federal public comment websites -- a vital institution for American democracy -- secure in this time of crisis?